32 research outputs found

    The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices

    Scientific advances build on reproducible research, which requires publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets, but far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics. This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the south-east of the United Kingdom, including in London. In total, 28 context labels were annotated, including transportation mode, participant's posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, which corresponds to 2812 hours of labelled data and 17562 km of travelled distance. We present how we set up the data collection, including the equipment used and the experimental protocol. We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered and present the lessons learned and some of the best practices we developed to ensure high-quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset. In particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset.
The full dataset is being made available to the community, and a thorough preview is already published.
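As an illustration of how such sensor data might feed a transportation-mode recognizer, the following is a minimal sketch (not the authors' pipeline): it splits a stream of raw accelerometer triples into fixed-size windows, computes simple magnitude statistics per window, and applies a toy stillness rule. The window size and variance threshold are illustrative assumptions; a real system would use many more features and a trained classifier.

```python
import math

def magnitude_features(samples, window=64):
    """Non-overlapping windows over (ax, ay, az) samples; per window
    return (mean, std) of the acceleration magnitude."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        mags = [math.sqrt(x * x + y * y + z * z)
                for x, y, z in samples[i:i + window]]
        mean = sum(mags) / window
        std = math.sqrt(sum((m - mean) ** 2 for m in mags) / window)
        feats.append((mean, std))
    return feats

def classify(feat, still_std=0.5):
    """Toy rule: low magnitude variance suggests the device is still."""
    return "still" if feat[1] < still_std else "moving"
```

In practice the same windowed-feature layout would be fed to a learned model trained on the labelled transportation modes, rather than a hand-set threshold.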

    Competitive Live Evaluation of Activity-recognition Systems

    In order to ensure the validity and usability of activity recognition approaches, an agreement on a set of standard evaluation methods is needed. Due to the diversity of the sensors and other hardware employed, designing and accepting standard tests is a difficult task. This article presents an initiative to evaluate activity recognition systems: a living-lab evaluation established through an annual competition, EvAAL-AR (Evaluating Ambient Assisted Living Systems through Competitive Benchmarking – Activity Recognition). In the competition, each team brings their own activity-recognition system, which is evaluated live on the same activity scenario performed by an actor. The evaluation criteria attempt to capture practical usability: recognition accuracy, user acceptance, recognition delay, installation complexity, and interoperability with ambient assisted living systems. The article also presents the competing systems with emphasis on the two best-performing ones: (i) a system that achieved the best recognition accuracy, and (ii) a system that was evaluated as the best overall. Finally, the article presents lessons learned from the competition and ideas for future development of the competition and of the activity recognition field in general.

    Wild by Design: Workshop on Designing Ubiquitous Health Monitoring Technologies for Challenging Environments

    Recent years have seen an emergence of ubiquitous technologies that aim to monitor a person's health in their day-to-day life. However, although aimed at real-world settings and technically capable, most research is still limited in its real-world coverage, suitability, and adoption. In this workshop, we will focus on the challenges of real-world health monitoring deployments to produce forward-looking insights that can shape the way researchers and practitioners think about health monitoring, in platforms and systems that account for the complex environments where they are bound to be used.

    INLIFE - independent living support functions for the elderly : technology and pilot overview

    In this paper, we present the European H2020 project INLIFE (INdependent LIving support Functions for the Elderly). The project brought together 20 partners from nine countries with the goal of integrating into a common ICT platform a range of technologies intended to assist community-dwelling older people with cognitive impairment. The majority of technologies existed prior to INLIFE, and a key goal was to bring them together in one place along with a number of new applications to provide a comprehensive set of services. The range of INLIFE services fell into four broad areas: Independent Living Support, Travel Support, Socialization and Communication Support, and Caregiver Support. These included security applications, services to facilitate interactions with formal and informal caregivers, multilingual conversation support, web-based physical exercises, teleconsultations, and support for transport navigation. In total, over 2900 people participated in the project; they included elderly adults with cognitive impairment, informal caregivers, healthcare professionals, and other stakeholders. The aim of the study was to assess whether there was improvement or stabilization of cognitive, emotional and physical functioning, as well as overall well-being and quality of life, of those using the INLIFE services, and to assess user acceptance of the platform and individual services. The results confirm that there is great interest in and appetite for technological services to support older adults living with cognitive impairment in the community. Different services attracted different amounts of use and evaluation, with some proving extremely popular and others less so. The findings provide useful information on the ways in which older adults and their families, health and social care services, and other stakeholders wish to access technological services, what sort of services they are seeking, what sort of support they need to access services, and how these services might be funded.

    How Accurately Can Your Wrist Device Recognize Daily Activities and Detect Falls?

    Although wearable accelerometers can successfully recognize activities and detect falls, their adoption in real life is low because users do not want to wear additional devices. A possible solution is an accelerometer inside a wrist device/smartwatch. However, wrist placement might perform poorly in terms of accuracy due to frequent random movements of the hand. In this paper we perform a thorough, large-scale evaluation of methods for activity recognition and fall detection on four datasets. On the first two, we show that the left wrist performs better than the dominant right one, and also better than the elbow and the chest, but worse than the ankle, knee and belt. On the third (Opportunity) dataset, our method outperformed the related work, indicating that our feature preprocessing creates better input data. Finally, on a real-life unlabelled dataset, the recognized activities captured the subject's daily rhythm and activities. Our fall-detection method detected all of the fast falls and minimized the false positives, achieving 85% accuracy on the first dataset. Because the other datasets did not contain fall events, only false positives were evaluated, resulting in 9 for the second, 1 for the third, and 15 for the real-life dataset (57 days of data).
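A common baseline for accelerometer-based fall detection (not necessarily the method evaluated above) looks for a near-free-fall dip in acceleration magnitude followed shortly by a high-magnitude impact. A minimal sketch with illustrative thresholds and timing:

```python
import math

G = 9.81  # standard gravity, m/s^2

def detect_fall(mags, free_thresh=0.5 * G, impact_thresh=3.0 * G, max_gap=50):
    """Flag a fall when a near-free-fall sample (magnitude below free_thresh)
    is followed within max_gap samples by an impact above impact_thresh."""
    for i, m in enumerate(mags):
        if m < free_thresh:
            for j in range(i + 1, min(i + 1 + max_gap, len(mags))):
                if mags[j] > impact_thresh:
                    return True
    return False
```

Such thresholds trade off missed slow falls against false positives from vigorous hand movements, which is exactly the difficulty with wrist placement discussed above.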

    Breathing Rate Estimation from Head-Worn Photoplethysmography Sensor Data Using Machine Learning

    Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that the monitoring of heart activity is less complex than the monitoring of breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions is still a challenge. The challenge is even greater when new wearable devices with novel sensor placements are being used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask equipped with a PPG sensor placed on the forehead of a subject. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality assessment and motion-artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets that contain data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible based on PPG data acquired from a head-worn device.
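Spectral approaches to this problem typically isolate the respiratory band of the signal and pick its dominant frequency. The following is a minimal sketch of that idea only, not the authors' algorithm (which adds quality assessment, motion-artifact removal, and machine learning): a brute-force DFT peak search within an assumed 0.1–0.7 Hz respiratory band.

```python
import math

def breathing_rate_bpm(signal, fs, lo=0.1, hi=0.7):
    """Estimate breathing rate as the dominant DFT frequency within the
    assumed respiratory band [lo, hi] Hz; result in breaths per minute.
    fs is the sampling rate in Hz."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if lo <= f <= hi:
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            p = re * re + im * im  # spectral power at bin k
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60.0
```

On real PPG the respiratory component is weak and superimposed on the cardiac pulse, which is why quality assessment and artifact removal, as in the paper above, matter so much in practice.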

    Accelerometer Placement for Posture Recognition and Fall Detection

    This paper presents an approach to fall detection with accelerometers that exploits posture recognition to identify postures that may be the result of a fall. Posture recognition as a standalone task was also studied. Nine placements of up to four sensors were considered: on the waist, chest, thigh and ankle. The results are compared to the results of a system using ultrawideband location sensors on a scenario consisting of events difficult to recognize as falls or non-falls. Three accelerometers proved sufficient to correctly recognize all the events except one (a slow fall). The location-based system was comparable to two accelerometers, except that it was able to recognize the slow fall because it resulted in lying outside the bed, whose location was known to the system. One accelerometer was able to recognize only the most clear-cut fall. Two accelerometers achieved over 90% accuracy of posture recognition, which was better than the location-based system. Chest and waist accelerometers proved best at both tasks, with the chest accelerometer having a slight advantage in posture recognition.
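Posture recognition from a single accelerometer often reduces to estimating the tilt of a body axis relative to gravity. A toy sketch of that principle, assuming a chest-worn sensor whose y-axis points along the torso (the axis convention and the 45-degree threshold are illustrative assumptions, not the paper's method):

```python
import math

def posture(ax, ay, az):
    """Classify upright vs lying from one static accelerometer reading.
    Assumes the sensor y-axis is aligned with the torso, so when standing
    it points opposite to gravity and ay is close to +g."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    if mag == 0:
        return "unknown"
    # Angle between the torso axis and the measured gravity vector.
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, ay / mag))))
    return "upright" if tilt < 45.0 else "lying"
```

Combining such per-sensor tilt estimates from the waist, chest, thigh and ankle is what allows multi-accelerometer setups, as in the study above, to separate ambiguous events like slow falls from ordinary lying down.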